Big tech has distracted world from existential risk of AI, says top scientist
Big tech has succeeded in distracting the world from the existential risk to humanity that artificial intelligence still poses, a leading scientist and AI campaigner has warned. Speaking with the Guardian at the AI Summit in Seoul, South Korea, Max Tegmark said the shift in focus from the extinction of life to a broader conception of safety of artificial intelligence risked an unacceptable delay in imposing strict regulation on the creators of the most powerful programs. "In 1942, Enrico Fermi built the first ever reactor with a self-sustaining nuclear chain reaction under a Chicago football field," Tegmark, who trained as a physicist, said. "When the top physicists at the time found out about that, they really freaked out, because they realised that the single biggest hurdle remaining to building a nuclear bomb had just been overcome. They realised that it was just a few years away – and in fact, it was three years, with the Trinity test in 1945. "AI models that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] are the same warning for the kind of AI that you can lose control over."
- Asia > South Korea > Seoul > Seoul (0.28)
- North America > United States > Illinois > Cook County > Chicago (0.25)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.06)
- Leisure & Entertainment (0.36)
- Government (0.32)
Top scientist warns AI could surpass human intelligence by 2027 - decades earlier than previously predicted
The computer scientist and CEO who popularized the term 'artificial general intelligence' (AGI) believes AI is verging on an exponential 'intelligence explosion.' The PhD mathematician and futurist Ben Goertzel made the prediction while closing out a summit on AGI this month: 'It seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years.' 'Once you get to human-level AGI,' Goertzel, sometimes called the 'father of AGI,' added, 'within a few years you could get a radically superhuman AGI.' While the futurist admitted that he 'could be wrong,' he went on to predict that the only impediment to a runaway, ultra-advanced AI -- far more advanced than its human makers -- would be if the bot's 'own conservatism' advised caution. Goertzel made his predictions during his closing remarks last week at the '2024 Beneficial AI Summit and Unconference,' partially sponsored by SingularityNET, the firm where he is CEO.
- North America > Panama > Panama > Panama City (0.06)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.05)
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Nagasaki Prefecture > Nagasaki (0.05)
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.05)
- Government (0.50)
- Law (0.32)
The Download: OpenAI's top scientist on AGI, and gene therapy to restore hearing
Ilya Sutskever, OpenAI's cofounder and chief scientist, is no longer focusing on building the next generation of his company's flagship generative AI models. Instead his new priority is to figure out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the foresight of a true believer) from going rogue. A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. He thinks ChatGPT just might be conscious (if you squint).
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.96)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.96)
Microsoft has given up 'significant sales' over concerns that the customer will use AI for evil, says a top scientist
Long-time Microsoft scientist Eric Horvitz says that the software company takes AI ethics so seriously, "significant sales have been cut off" because it was concerned that the potential customer would use its technology for no good. Horvitz, a director and technical fellow with Microsoft Research, made his remarks on stage at Carnegie Mellon University's K&L Gates Conference on Ethics and AI on Monday, as originally reported by GeekWire. I got in touch with Microsoft for more clarity on Horvitz's remarks. The company confirmed that Microsoft had never cut off a deal with an existing customer -- Horvitz was referring to the loss of possible revenue from potential customers. "Microsoft may decide to forego the pursuit of business proposals for numerous reasons, including the company's commitment to upholding human rights," a spokesperson tells Business Insider.
Here's how one of Google's top scientists thinks people should prepare for machine learning
Peter Norvig, a former computer scientist at NASA, sympathized with anyone frightened by the prospect. "It is scary," he said. But just as the internal combustion engine ultimately led to the demise of the stagecoach, and also to millions of new jobs, so will these destructive technologies lead to new opportunities that are now unimagined. "It's easy to see jobs disappearing ... [but] it's hard to see the new jobs that will be invented because they don't exist yet." Young people starting on their career paths shouldn't necessarily be discouraged by machine learning, or abandon career aspirations because of it, Norvig said.
Here's what the world will be like in 2045, according to DARPA's top scientists
The world is going to be a very different place in 2045. Predicting the future is fraught with challenges, but when it comes to technological advances and forward thinking, experts working at the Pentagon's research agency may be the best people to ask. Launched in 1958, the Defense Advanced Research Projects Agency is behind some of the biggest innovations in the military -- many of which have crossed over to the civilian technology market. These include things like advanced robotics, global-positioning systems, and the internet. It's pretty likely that robots and artificial intelligence are going to transform a bunch of industries, drone aircraft will continue their leap from the military to the civilian market, and self-driving cars will make your commute a lot more bearable.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)